ICML Exploration & Exploitation Challenge: Keep it simple!
Authors
Abstract
Recommendation has become a key feature in the economy of many companies (online shopping, search engines, ...). A lot of work is being done on recommender systems, and there is still much to do to improve them: nowadays, in many companies, most of the job is still done by hand. Moreover, even when a supposedly smart recommender system is designed, it is hard to evaluate without exposing it to a real audience, which obviously raises economic issues. The ICML Exploration & Exploitation challenge is an attempt to elicit efficient recommendation techniques, with a particular focus on limited computational resources. The challenge also proposes a framework to address the problem of evaluating a recommendation algorithm on real data. We took part in this challenge and achieved the best performance; this paper reports on that achievement. We also discuss the evaluation process and propose a better one for future challenges of the same kind.
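The evaluation framework mentioned above replays a candidate policy against logged user interactions. As a minimal sketch of how such an offline "replay" evaluation can work (not necessarily the challenge's exact protocol), the snippet below scores a policy only on those logged events where it happens to pick the same article that was actually displayed; the `policy.select` / `policy.update` interface is a hypothetical one introduced for illustration:

```python
def replay_evaluate(policy, logged_events, n_arms):
    """Offline 'replay' evaluation of a bandit policy on logged data.

    Each logged event is (context, displayed_arm, click), where the displayed
    arm is assumed to have been chosen uniformly at random by the logging
    policy.  An event only counts when the evaluated policy picks the same arm
    that was actually displayed, which yields an unbiased estimate of the
    policy's click-through rate under that assumption.
    """
    clicks, matches = 0, 0
    for context, displayed_arm, click in logged_events:
        chosen = policy.select(context, n_arms)
        if chosen == displayed_arm:           # event is usable
            policy.update(context, chosen, click)
            clicks += click
            matches += 1
    return clicks / max(matches, 1)           # estimated click-through rate
```

Discarding non-matching events is what allows a new policy to be assessed on historical data without exposing it to live traffic.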
Similar articles
Stumping along a Summary for Exploration & Exploitation Challenge 2011
The Pascal Exploration & Exploitation challenge 2011 seeks to evaluate algorithms for the online website content selection problem. This article presents the solution we used to achieve second place in this challenge and some side-experiments we performed. The methods we evaluated are all structured in three layers. The first layer provides an online summary of the data stream for continuous an...
Balancing between Estimated Reward and Uncertainty during News Article Recommendation for ICML 2012 Exploration and Exploitation Challenge
Automatically recommending relevant content to users in a web service is an important aspect that is directly linked to the income of many internet companies. The ICML 2012 Exploration & Exploitation Workshop holds an open challenge that aims at building a state-of-the-art news article recommendation system on the Yahoo! platform. We propose an efficient scoring model that recommends the news article with t...
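The phrase "estimated reward and uncertainty" suggests an upper-confidence-bound style index. As an illustrative sketch only (a generic UCB1 rule, not the authors' actual scoring model), each article can be scored by its empirical click-through rate plus an exploration bonus, and the highest-scoring article is recommended:

```python
import numpy as np

class UCBScorer:
    """Index policy: estimated reward plus an uncertainty bonus (UCB1-style)."""

    def __init__(self, n_arms, alpha=1.0):
        self.alpha = alpha                     # scales the exploration bonus
        self.counts = np.zeros(n_arms)         # displays per article
        self.sums = np.zeros(n_arms)           # clicks per article

    def select(self, context, n_arms):
        t = self.counts.sum() + 1.0
        means = self.sums / np.maximum(self.counts, 1)
        bonus = self.alpha * np.sqrt(2.0 * np.log(t) / np.maximum(self.counts, 1))
        scores = np.where(self.counts == 0, np.inf, means + bonus)
        return int(np.argmax(scores))          # recommend the best-scoring article

    def update(self, context, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward
```

A larger bonus weight favours exploration of rarely shown articles; a smaller one exploits the current click-rate estimates.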
Algorithm-Directed Exploration for Model-Based Reinforcement Learning in Factored MDPs
One of the central challenges in reinforcement learning is to balance the exploration/exploitation tradeoff while scaling up to large problems. Although model-based reinforcement learning has been less prominent than value-based methods in addressing these challenges, recent progress has generated renewed interest in pursuing model-based approaches: theoretical work on the exploration/exploitati...
Augmented Downhill Simplex a Modified Heuristic Optimization Method
The Augmented Downhill Simplex Method (ADSM), introduced here, is a heuristic combination of the Downhill Simplex Method (DSM) with a random search algorithm. DSM is an interpretable nonlinear local optimization method; however, it is a local exploitation algorithm, so it can be trapped in a local minimum. In contrast, random search is a global exploration method, but it is less efficient. Here, rand...
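As a rough illustration of the exploration/exploitation combination described above (not the paper's ADSM algorithm itself), one can pair uniform random restarts with SciPy's Nelder-Mead downhill simplex; the function name `random_restart_simplex` and its arguments are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def random_restart_simplex(f, bounds, n_restarts=20, seed=0):
    """Global exploration via random starting points, local exploitation via
    Nelder-Mead (downhill simplex); returns the best local optimum found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                       # box constraints
    best_x, best_val = None, np.inf
    for _ in range(n_restarts):
        x0 = rng.uniform(lo, hi)                      # exploration step
        res = minimize(f, x0, method="Nelder-Mead")   # exploitation step
        if res.fun < best_val:
            best_x, best_val = res.x, res.fun
    return best_x, best_val
```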
PAC-Bayes-Bernstein Inequality for Martingales and its Application to Multiarmed Bandits
We combine PAC-Bayesian analysis with a Bernstein-type inequality for martingales to obtain a result that makes it possible to control the concentration of multiple (possibly uncountably many) simultaneously evolving and interdependent martingales. We apply this result to derive a regret bound for the multiarmed bandit problem. Our result forms a basis for integrative simultaneous analysis of e...